09. Hyperparameter Tuning & Regularization
We ended the last concept with a classification accuracy of 77%. However, there are a few more knobs we can turn to improve performance.
At first we used our best guesses for the hyperparameters, but now we can explore the space and see if we can improve performance. We found that by reducing the maximum tree depth to 2, we significantly increased our classification accuracy, from 77% to 89%. By reducing the depth to 2, we are regularizing our model. Regularization is an important topic in ML and is our best way to avoid overfitting. This is why we see an increase in the cross-validated performance.
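The search described above can be sketched with scikit-learn's `GridSearchCV`. This is a minimal illustration, not the course's actual notebook: the dataset here is synthetic, and the depth grid is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the course dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Search over maximum tree depths; shallower trees are more regularized.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3, 5, 10, None]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

`best_score_` reports the best cross-validated accuracy found, and `best_params_` the depth that achieved it.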
But we used the entire dataset many times to figure out the optimal hyperparameters. In some sense, this is also overfitting. Our 89% classification accuracy is likely too high, and not the generalized performance. In the next concept, we will see what our actual generalized performance might be when we use our dataset to optimize hyperparameters.
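One standard way to estimate generalized performance while still tuning hyperparameters is nested cross-validation, sketched below. This is an assumption about the approach, not necessarily what the next concept uses; the dataset and depth grid are again illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Inner loop: pick max_depth on each training split.
inner_search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3, 5, 10]},
    cv=3,
)

# Outer loop: score the tuned model on data never seen during tuning.
outer_scores = cross_val_score(inner_search, X, y, cv=5)
print(round(outer_scores.mean(), 3))
```

The outer-loop mean is typically a bit lower than the inner tuning score, because the held-out folds played no part in choosing the hyperparameters.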